
    The Effectiveness of Using Thelen’s Model on Acquiring Physical Concepts and Developing Scientific Thinking for Tenth-Grade Students in Jordan

    The current research aims to determine the effect of teaching with Thelen’s Model on the acquisition of physical concepts and the development of scientific thinking among tenth-grade students in Jordan. The researcher used the quasi-experimental method. The sample consisted of 55 female tenth-grade students from Mutah Secondary School for Girls in the academic year 2021-2022. The researcher prepared a test of physical concepts and built a test of scientific thinking, and their validity and reliability were verified. SPSS software was used to analyze the data statistically. The results showed that the students of the experimental group, who studied according to Thelen’s Model, outperformed the control group, who studied in the usual way, on both the physical-concepts test and the scientific-thinking test, with a statistically significant difference. In light of the results, the researcher recommended using Thelen’s Model in teaching physics because of its effective impact on acquiring the concepts of this subject and on developing scientific thinking.

    Content modelling for human action detection via multidimensional approach

    Video content analysis is an active research domain due to the availability and growth of audiovisual data in digital format. There is a need to automatically extract video content for efficient access, understanding, browsing and retrieval of videos. To obtain the information that is of interest and to provide better entertainment, tools are needed to help users extract relevant content and navigate effectively through the large amount of available video information. Existing methods do not attempt to model and estimate the semantic content of the video. Detecting and interpreting human presence, actions and activities is one of the most valuable functions in the proposed framework. The general objective of this research is to analyze and process audio-video streams into a robust audiovisual action recognition system by integrating, structuring and accessing multimodal information via a multidimensional retrieval and extraction model. The proposed technique characterizes action scenes by integrating cues obtained from both the audio and video tracks. Information is combined based on visual features (motion, edges, and visual characteristics of objects), audio features and video for recognizing action. The model uses HMMs and GMMs to provide a framework for fusing these features and to represent the multidimensional structure of the framework. The action-related visual cues are obtained by computing the spatio-temporal dynamic activity from the video shots and by abstracting specific visual events. Simultaneously, the audio features are analyzed by locating and computing several sound effects of action events embedded in the video. Finally, these audio and visual cues are combined to identify the action scenes. Compared with using a single source of either the visual or the audio track alone, such combined audiovisual information provides more reliable performance and allows the story content of movies to be understood in more detail. To compare the usefulness of the proposed framework, several experiments were conducted; the results were obtained using visual features only (77.89% precision; 72.10% recall), audio features only (62.52% precision; 48.93% recall) and combined audiovisual features (90.35% precision; 90.65% recall).
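The precision and recall figures above can be computed from sets of detected versus ground-truth action scenes. The sketch below is a minimal illustration with hypothetical scene identifiers; the actual framework, detector outputs and ground truth are not given in the abstract.

```python
def precision_recall(predicted, ground_truth):
    """Precision and recall for a set of detected action scenes.

    `predicted` and `ground_truth` are sets of scene identifiers.
    """
    true_positives = len(predicted & ground_truth)
    precision = true_positives / len(predicted) if predicted else 0.0
    recall = true_positives / len(ground_truth) if ground_truth else 0.0
    return precision, recall

# Hypothetical detections from a fused audio-visual classifier
fused = {1, 2, 3, 4, 5, 7, 9, 10}
truth = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}
p, r = precision_recall(fused, truth)
```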

    3D face recognition using multiple features for local depth information

    In this paper, we recognize multiple features from local depth information using distance and angle calculations. These features are computed from twelve salient points by considering the distances and angles between them. Fifty-three non-independent features are then extracted, and their discriminating power is analyzed. The result shows an improvement compared to previous work.
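As a rough illustration of distance and angle features over salient 3-D points, the sketch below computes both quantities. The coordinates are hypothetical; the paper's twelve salient points and fifty-three derived features are not specified in the abstract.

```python
import math

def distance(p, q):
    """Euclidean distance between two 3-D salient points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def angle(a, vertex, b):
    """Angle (radians) at `vertex` formed by points a and b."""
    va = [x - v for x, v in zip(a, vertex)]
    vb = [x - v for x, v in zip(b, vertex)]
    dot = sum(x * y for x, y in zip(va, vb))
    return math.acos(dot / (distance(a, vertex) * distance(b, vertex)))

# Hypothetical coordinates (mm): nose tip and the two eye centres
nose_tip = (0.0, 0.0, 10.0)
left_eye = (-30.0, 40.0, 0.0)
right_eye = (30.0, 40.0, 0.0)

d = distance(left_eye, right_eye)          # 60.0
theta = angle(left_eye, nose_tip, right_eye)
```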

    Classification of herbs plant diseases via hierarchical dynamic artificial neural network

    When herb plants have a disease, they can display a range of symptoms such as colored spots or streaks on the leaves, stems and seeds of the plant. These visual symptoms continuously change their color, shape and size as the disease progresses. Once the image of a target is captured digitally, a myriad of image processing algorithms can be used to extract features from it. The usefulness of each of these features depends on the particular patterns to be highlighted in the image. A key point in the implementation of optimal classifiers is the selection of features that characterize the image. In this study, image processing and pattern classification are used to implement a machine vision system that can identify and classify the visual symptoms of herb plant diseases. The image processing is divided into four stages: image pre-processing to remove image noise (fixed-valued impulse noise, random-valued impulse noise and Gaussian noise); image segmentation to identify regions in the image likely to qualify as diseased; image feature extraction and selection to extract and select important image features; and image classification to classify the image into different herb disease classes. This paper proposes an unsupervised disease pattern recognition and classification algorithm based on a modified Hierarchical Dynamic Artificial Neural Network, which provides adjustable sensitivity-specificity detection and classification of herb diseases from the analysis of noise-free colored herb images. It also proposes a disease treatment algorithm capable of providing a suitable treatment and control for each identified herb disease.
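For the pre-processing stage, a 3x3 median filter is a standard remedy for fixed- and random-valued impulse (salt-and-pepper) noise; the abstract does not name the specific filters used, so the sketch below is only one plausible instance.

```python
from statistics import median

def median_filter(image):
    """3x3 median filter over a 2-D list of grey values.

    Replaces each interior pixel with the median of its 3x3
    neighbourhood, suppressing isolated impulse-noise pixels
    while preserving edges better than mean filtering.
    """
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [image[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = median(window)
    return out
```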

    Extracting and integrating multimodality features via multidimensional approach for video retrieval

    This work discusses the application of an Artificial Intelligence technique called data extraction and a process-based ontology in constructing experimental qualitative models for video retrieval and detection. We present a framework architecture that uses multimodality features as the knowledge representation scheme to model the behaviors of a number of human actions in video scenes. The main focus of this paper is placed on the design of two main components (model classifier and inference engine) for a tool abbreviated as VSAD (Video Action Scene Detector) for retrieving and detecting human actions from video scenes. The discussion starts by presenting the workflow of the retrieval and detection process and the automated model classifier construction logic. We then demonstrate how the constructed classifiers can be used with multimodality features for detecting human actions. Finally, behavioral explanation manifestation is discussed. The simulator is implemented bilingually: MATLAB and C++ at the back end supply data and theories, while Java handles the front-end GUI and action pattern updating.

    Twelve anchor points detection by direct point calculation

    Facial features can be categorized into three approaches: region approaches, anchor point (landmark) approaches and contour approaches. Generally, the anchor point approach provides a more accurate and consistent representation, and for this reason it has been chosen here. As experimental data sets have become larger, algorithms have become more sophisticated, even if the reported recognition rates are not as high as in some earlier works; this causes higher complexity and a heavier computational burden, and indirectly affects the response time of real-time face recognition systems. Here, we propose an approach that calculates the points directly from the text file to detect twelve anchor points (nose tip, mouth centre, right eye centre, left eye centre, upper nose and chin). To obtain the anchor points, the nose tip is detected first; then the upper nose and face points are localized; lastly, the outer and inner eye corners are localized. An experiment has been carried out with 420 models taken from GavabDB in two positions with frontal view and variations of expression and position. Our results are compared with those of three similar works and show that a better result is obtained, with a median error over the eight points of around 5.53 mm.
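The abstract does not detail the point calculations, but one common heuristic for frontal 3-D scans takes the nose tip as the point closest to the camera (largest depth), and accuracy is reported as a median error over corresponding points. The sketch below illustrates both ideas under those assumptions.

```python
import math
from statistics import median

def detect_nose_tip(points):
    """For a frontal 3-D scan, a common heuristic takes the nose tip
    as the point with the largest depth (z) value.

    `points` is a list of (x, y, z) tuples.
    """
    return max(points, key=lambda p: p[2])

def median_point_error(detected, ground_truth):
    """Median Euclidean error (in the scan's units, e.g. mm) over
    corresponding detected / ground-truth anchor points."""
    errors = [math.dist(d, g) for d, g in zip(detected, ground_truth)]
    return median(errors)
```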

    Association of Helicobacter pylori with colorectal cancer development.

    Background: Helicobacter pylori (H. pylori) may be associated with colorectal cancer. However, the underlying mechanisms are still unclear. Objectives: Explore the serostatus of the H. pylori cytotoxicity-associated gene A product (CagA) in patients with colorectal carcinoma, and assess the association of H. pylori with colorectal cancer via c-Myc and MUC-2 proteins in tumor tissues. Methods: H. pylori CagA IgG antibodies were screened using enzyme-linked immunosorbent assay (ELISA) in 30 patients with colorectal carcinoma and 30 cancer-free control subjects. Paraffin-embedded blocks were examined for the expression of c-Myc and MUC-2 protein by immunohistochemistry. Results: H. pylori CagA seropositivity increased significantly among colorectal cancer patients (p <0.05). c-Myc was over-expressed (80%) and MUC-2 down-expressed (63%) in colorectal carcinoma tissue relative to resection margins (p <0.05). c-Myc over-expression and MUC-2 down-expression were associated with CagA-positive rather than CagA-negative H. pylori patients. In 16 CagA-seropositive vs. 14 CagA-seronegative patients, the expression rates were 97.3% vs. 64.2% for c-Myc and 33.3% vs. 78.5% for MUC-2, respectively. CagA IgG level was significantly higher in c-Myc-positive than in c-Myc-negative patients (p= 0.036), and in MUC-2-negative than in MUC-2-positive patients (p= 0.044). c-Myc and MUC-2 were positively and inversely correlated with CagA IgG level, respectively (p <0.05). Conclusions: CagA-seropositive H. pylori is most probably associated with colorectal cancer development. Part of the underlying mechanism for such an association might be via alterations in the expression of MUC-2, which depletes the protective mucous layer in the colorectum, and of c-Myc, which stimulates the growth of cancerous cells.

    A study on surveillance video abstraction techniques

    The goal of surveillance video abstraction is to generate a video abstract that includes the important events and objects while eliminating the redundant, activity-free frames of the original video. Although much research and progress has been made in video abstraction, the developed approaches either fail to accurately and effectively cover the overall visual content of the video or are computationally expensive in terms of time or processing. In this paper, we first critically review the video abstraction techniques applicable in the surveillance domain based on our hierarchical classification, and then briefly introduce a new approach for generating a static surveillance video abstract, which mitigates the drawbacks of the reviewed approaches.
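A minimal way to drop redundant, activity-free frames is to keep a frame only when it differs sufficiently from the last kept frame. The sketch below is a generic frame-difference baseline, not the paper's actual method, which the abstract does not describe.

```python
def abstract_video(frames, threshold):
    """Static abstraction by frame differencing.

    Keeps a frame only when its mean absolute difference from the
    last kept frame reaches `threshold`, so stretches without
    activity collapse to a single representative frame.
    `frames` is a list of flat pixel-value lists; returns the
    indices of the kept (key) frames.
    """
    if not frames:
        return []
    kept = [0]
    for i in range(1, len(frames)):
        last = frames[kept[-1]]
        diff = sum(abs(a - b) for a, b in zip(frames[i], last)) / len(last)
        if diff >= threshold:
            kept.append(i)
    return kept
```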

    Object detection and representation method for surveillance video indexing

    The huge volume of video produced by surveillance cameras has increased the demand for fast and effective video surveillance indexing and retrieval systems. Although environmental conditions such as light reflection, illumination changes, shadow and occlusion can affect the indexing and retrieval results of any video surveillance system, the use of reliable and robust object (blob) detection and representation methods can improve the performance of the system. This paper presents a video indexing module, part of a video surveillance indexing and retrieval framework, to overcome the above challenges. The proposed video indexing module is composed of seven components: background modeling, foreground extraction, blob detection, blob analysis, feature extraction, blob representation and blob indexing. The experimental results showed that the selection of an appropriate blob detection method can improve the performance of the system. Moreover, the experiments also demonstrated that the proposed blob representation method is able to prevent the processing of redundant blob information.
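The first three components of such a pipeline can be sketched generically: background subtraction yields a foreground mask, and connected-component labelling turns the mask into blobs. The abstract does not specify the paper's background model or labelling scheme, so the code below shows only the simplest variants (static background, 4-connectivity).

```python
def extract_foreground(frame, background, threshold):
    """Binary foreground mask by background subtraction: a pixel is
    foreground when it deviates from the background model by more
    than `threshold`. Both inputs are 2-D lists of grey values."""
    return [[1 if abs(p - b) > threshold else 0
             for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]

def detect_blobs(mask):
    """Connected-component (blob) labelling on a binary mask via
    4-connectivity flood fill; returns a list of blobs, each a
    list of (row, col) pixel coordinates."""
    h, w = len(mask), len(mask[0])
    seen, blobs = set(), []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and (y, x) not in seen:
                stack, blob = [(y, x)], []
                seen.add((y, x))
                while stack:
                    cy, cx = stack.pop()
                    blob.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and (ny, nx) not in seen):
                            seen.add((ny, nx))
                            stack.append((ny, nx))
                blobs.append(blob)
    return blobs
```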

    Speeded up surveillance video indexing and retrieval using abstraction

    Much research has been conducted on video abstraction for quick viewing of video archives; however, there is a lack of approaches that consider abstraction as a pre-processing stage in video analysis. This paper investigates the efficiency of integrating video abstraction into a surveillance video indexing and retrieval framework. The basic idea is to reduce the computational complexity and cost of the overall process by using an abstract version of the original video that excludes unnecessary and redundant information. The experimental results show a significant reduction of 87% in computational cost when using the abstract video rather than the original video in both the indexing and retrieval processes.
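The reported 87% figure is a relative cost reduction; for clarity, the arithmetic is simply the saved fraction of the original cost. The timings below are hypothetical, chosen only to reproduce that percentage.

```python
def cost_reduction(original_cost, abstracted_cost):
    """Percentage reduction in processing cost when indexing and
    retrieval run on the abstract video instead of the original."""
    return 100.0 * (original_cost - abstracted_cost) / original_cost

# Hypothetical timings: a 13 s abstract pipeline vs. a 100 s
# original pipeline corresponds to an 87% reduction.
r = cost_reduction(100.0, 13.0)  # 87.0
```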